A mesh-free method is proposed for modeling beam-wall interactions in particle accelerators. The key idea of our approach is to use a deep neural network as a surrogate for the set of partial differential equations governing the particle beam, combined with the surface impedance concept. The proposed method is applied to the coupling impedance of an accelerator vacuum chamber with a thin conductive coating and is validated against existing analytical formulas.
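To make the surrogate idea concrete, the following is a minimal, hypothetical sketch in the spirit of physics-informed training: a small fully connected network is fitted to a toy 2D Laplace problem with a Robin-type boundary term standing in for a surface-impedance condition. The geometry, the PDE, and the parameter Zs are illustrative assumptions, not the paper's actual formulation.

```python
# Minimal sketch (not the paper's method): a neural-network surrogate trained
# on PDE and boundary residuals, with an impedance-like Robin condition.
import torch

torch.manual_seed(0)
net = torch.nn.Sequential(
    torch.nn.Linear(2, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 1),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
Zs = 0.1  # hypothetical surface-impedance parameter

def laplacian(u, xy):
    grad = torch.autograd.grad(u.sum(), xy, create_graph=True)[0]
    lap = 0.0
    for i in range(2):
        lap = lap + torch.autograd.grad(grad[:, i].sum(), xy, create_graph=True)[0][:, i]
    return lap

for step in range(2000):
    # collocation points in the interior and on the wall y = 1
    xy_in = torch.rand(256, 2, requires_grad=True)
    xy_bd = torch.cat([torch.rand(64, 1), torch.ones(64, 1)], dim=1).requires_grad_(True)

    u_in = net(xy_in)
    pde_loss = (laplacian(u_in, xy_in) ** 2).mean()       # interior residual of the Laplace eq.

    u_bd = net(xy_bd)
    du = torch.autograd.grad(u_bd.sum(), xy_bd, create_graph=True)[0][:, 1:2]
    bc_loss = ((du + u_bd / Zs) ** 2).mean()               # Robin condition du/dn + u/Zs = 0

    loss = pde_loss + bc_loss
    opt.zero_grad()
    loss.backward()
    opt.step()
```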
Distinguishing the intrinsic subtypes of breast cancer is crucial for deciding the optimal treatment strategy. Deep learning can predict the subtypes from genetic information more accurately than conventional statistical methods, but to date it has not been used directly to examine which genes are associated with which subtypes. To clarify the mechanisms embedded in the intrinsic subtypes, we developed an explainable deep learning model called the point-wise linear (PWL) model, which generates a customized logistic regression for each patient. Logistic regression is familiar to both physicians and medical informatics researchers and allows the importance of feature variables to be analyzed; the PWL model exploits these practical strengths of logistic regression. In this study, we show that analyzing breast cancer subtypes is beneficial for patients and is also one of the best ways to validate the capability of the PWL model. First, we trained the PWL model on RNA-seq data to predict the PAM50 intrinsic subtypes and applied it to 41 of the 50 PAM50 genes through the subtype prediction task. Second, we developed a deep enrichment analysis method to reveal the relationships between the PAM50 subtypes and copy-number variations in breast cancer. Our findings showed that the PWL model uses genes related to cell-cycle-related pathways. These preliminary successes in breast cancer subtype analysis demonstrate the potential of our analysis strategy to clarify the underlying mechanisms of breast cancer and improve overall clinical outcomes.
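As an illustration of the point-wise linear idea described above, the sketch below has an auxiliary network map each patient's expression vector to patient-specific logistic-regression coefficients, so every prediction can be read as an ordinary logistic regression. The layer sizes, the PyTorch implementation, and the toy inputs are our own assumptions, not the paper's architecture.

```python
# Sketch of a point-wise linear model: per-sample logistic-regression weights.
import torch

class PointWiseLinear(torch.nn.Module):
    def __init__(self, n_genes, n_classes):
        super().__init__()
        # produces one weight matrix and bias vector per sample
        self.coef_net = torch.nn.Sequential(
            torch.nn.Linear(n_genes, 128), torch.nn.ReLU(),
            torch.nn.Linear(128, n_classes * (n_genes + 1)),
        )
        self.n_genes, self.n_classes = n_genes, n_classes

    def forward(self, x):                                   # x: (batch, n_genes)
        params = self.coef_net(x).view(-1, self.n_classes, self.n_genes + 1)
        w, b = params[..., :-1], params[..., -1]            # per-sample weights and bias
        logits = (w * x.unsqueeze(1)).sum(-1) + b           # ordinary logistic-regression form
        return logits, w                                    # w is directly interpretable

model = PointWiseLinear(n_genes=50, n_classes=5)            # e.g. PAM50 genes, 5 subtypes
x = torch.randn(8, 50)                                      # stand-in for normalized RNA-seq
logits, per_patient_weights = model(x)
```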
Classification bandits are multi-armed bandit problems whose task is to classify a given set of arms into a positive or negative class depending on whether the proportion of arms with expected reward at least h is no less than w, for given thresholds h and w. We study a special classification bandit problem in which arms correspond to points x in d-dimensional real space with expected rewards f(x) generated according to a Gaussian process prior. We develop a framework algorithm for the problem that accommodates various arm selection policies, and we propose policies called FCB and FTSV. We show a smaller sample-complexity upper bound for FCB than for the existing level set estimation algorithm, in which whether f(x) is at least h must be decided for every arm x. We also propose arm selection policies that depend on an estimated rate of arms with rewards of at least h and show that they improve empirical sample complexity. In our experiments, the rate-estimation versions of FCB and FTSV, together with that of the popular active learning policy that selects the point with maximum variance, outperform the other policies on synthetic functions, and the rate-estimation version of FTSV is also the best performer on our real-world dataset.
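The following sketch illustrates a generic GP-based selection rule for this setting: fit a GP posterior on the observed rewards and sample the arm whose confidence interval most ambiguously straddles the threshold h. It is a plain confidence-bound heuristic meant only to convey the problem setup; it is not the paper's FCB or FTSV policy, and the kernel, noise level, and constants are arbitrary choices.

```python
# Generic GP confidence-bound selection for threshold classification of arms.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)
h = 0.5                                     # reward threshold for the positive class
arms = rng.uniform(-1, 1, size=(200, 2))    # candidate points x in R^d (here d = 2)
f = lambda X: np.sin(3 * X[:, 0]) * np.cos(2 * X[:, 1])   # hidden reward function

X_obs = arms[rng.choice(len(arms), 5, replace=False)]
y_obs = f(X_obs) + 0.1 * rng.standard_normal(len(X_obs))

for t in range(30):
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5), alpha=1e-2)
    gp.fit(X_obs, y_obs)
    mu, sigma = gp.predict(arms, return_std=True)
    # ambiguity of "is f(x) >= h?": small when the CI is clearly above or below h
    ambiguity = 1.96 * sigma - np.abs(mu - h)
    x_next = arms[np.argmax(ambiguity)]
    y_next = f(x_next[None]) + 0.1 * rng.standard_normal(1)
    X_obs = np.vstack([X_obs, x_next])
    y_obs = np.concatenate([y_obs, y_next])

positive_rate = np.mean(gp.predict(arms) >= h)   # estimated rate of arms above h
```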
Learning-from-Observation (LfO) is a robot teaching framework for programming operations through few-shot human demonstrations. While most previous LfO systems rely on visual demonstration alone, recent research on robot teaching has shown the effectiveness of verbal instruction in making recognition robust and teaching interactive. To the best of our knowledge, however, few solutions have been proposed for LfO that utilize verbal instruction, namely multimodal LfO. This paper proposes a practical pipeline for multimodal LfO. On the input side, the user temporarily stops hand movements to match the granularity of human instructions with the granularity of robot execution. The pipeline recognizes tasks based on step-by-step verbal instructions accompanied by demonstrations, and the recognition is made robust through interactions with the user. We test the pipeline on a real robot and show that a user can successfully teach multiple operations from multimodal demonstrations. The results suggest the utility of the proposed pipeline for multimodal LfO.
Drug-drug interaction (DDI) prediction is an essential problem in the molecular field. Traditional methods of observing DDIs in medical experiments require substantial resources and labor. In this paper, we present a computational model, dubbed MedKGQA, based on graph neural networks that automatically predicts DDIs after reading multiple medical documents, cast as multi-hop machine reading comprehension. We introduce a knowledge fusion system to obtain the complete properties of drugs and proteins, and exploit a graph reasoning system to infer the drugs and proteins contained in the documents. Our model significantly outperforms previous state-of-the-art models on the QANGAROO MedHop dataset, achieving a 4.5% improvement in DDI prediction accuracy.
Robot developers build various types of robots to satisfy users' diverse demands. These demands depend on users' backgrounds, and the robots suitable for different users may vary. If a developer offers a user a robot different from the usual one, the robot-specific software has to be changed; on the other hand, robot-software developers would like to reuse their software as much as possible to reduce development effort. We propose a system design that takes hardware-level reusability into account. For this purpose, we begin with the Learning-from-Observation framework, which represents a target task in a robot-agnostic form, so that the resulting task description can be shared across various robots. When executing the task, the robot-agnostic description must be converted into commands for the target robot. To increase reusability, we first implement the skill library of robot motion primitives considering only the robot hand, regarding the robot body as merely a carrier that moves the hand along the target trajectory. The skill library remains reusable as long as the same robot hand is used. Second, we employ a generic IK solver so that robots can be swapped quickly. We verify the hardware-level reusability by applying two task descriptions to two different robots, Nextage and Fetch.
The black-box nature of end-to-end speech translation (E2E ST) systems makes it difficult to understand how source language inputs are being mapped to the target language. To solve this problem, we would like to simultaneously generate automatic speech recognition (ASR) and ST predictions such that each source language word is explicitly mapped to a target language word. A major challenge arises from the fact that translation is a non-monotonic sequence transduction task due to word ordering differences between languages -- this clashes with the monotonic nature of ASR. Therefore, we propose to generate ST tokens out-of-order while remembering how to re-order them later. We achieve this by predicting a sequence of tuples consisting of a source word, the corresponding target words, and post-editing operations dictating the correct insertion points for the target word. We examine two variants of such operation sequences which enable generation of monotonic transcriptions and non-monotonic translations from the same speech input simultaneously. We apply our approach to offline and real-time streaming models, demonstrating that we can provide explainable translations without sacrificing quality or latency. In fact, the delayed re-ordering ability of our approach improves performance during streaming. As an added benefit, our method performs ASR and ST simultaneously, making it faster than using two separate systems to perform these tasks.
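To illustrate the tuple-and-reorder idea, the hypothetical sketch below assumes each tuple carries the source word, its aligned target words, and an insertion index into the growing translation; the paper's actual operation set may differ. It shows how a monotonic transcript and a re-ordered translation can be realized from the same out-of-order token stream.

```python
# Hypothetical realization of (source word, target words, insertion op) tuples.
from typing import List, Tuple

def realize(tuples: List[Tuple[str, List[str], int]]):
    """tuples: (source_word, target_words, insert_position_in_translation)."""
    transcript = []               # monotonic ASR output
    translation: List[str] = []   # re-ordered ST output
    for src, tgt_words, pos in tuples:
        transcript.append(src)
        # clamp the insertion point so partial hypotheses remain valid
        pos = min(pos, len(translation))
        translation[pos:pos] = tgt_words
    return " ".join(transcript), " ".join(translation)

# toy German-to-English example with non-monotonic word order
tuples = [
    ("ich", ["I"], 0),
    ("habe", ["have"], 1),
    ("den", ["the"], 2),
    ("Apfel", ["apple"], 3),
    ("gegessen", ["eaten"], 2),   # inserted back between "have" and "the"
]
print(realize(tuples))            # -> transcript in source order, fluent translation
```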
The rapidly developing intersection of machine learning (ML) with high-energy physics (HEP) presents both opportunities and challenges to our community. Far beyond the application of standard ML tools to HEP problems, a generation of talent fluent in both fields is developing genuinely new and potentially revolutionary approaches. There is an urgent need to support the interdisciplinary community driving these developments, including funding dedicated research at the intersection of the two fields, investing in high-performance computing at universities and tailoring allocation policies to support this work, developing community tools and standards, and providing education and career paths for young researchers, thereby attracting the intellectual vitality of machine learning to high-energy physics.
Big data mining is well known to be an important task in data science, because it can provide useful observations and new knowledge hidden in a given large dataset. Proximity-based data analysis in particular is used in many real-life applications. In such analyses, the distances to the k nearest neighbors are usually employed, so the main bottleneck comes from data retrieval. Much effort has been made to improve the efficiency of these analyses, but they still incur large costs because they essentially require many data accesses. To avoid this problem, we propose a machine-learning technique that quickly and accurately estimates the k-NN distances (i.e., the distances to the k nearest neighbors) of a given query. We train a fully connected neural network model and exploit pivots to achieve accurate estimation. Our model is designed to have a useful advantage: it estimates the k-NN distances in a single inference whose time is O(1) (incurring no data accesses), while maintaining high accuracy. Experimental results and case studies on real datasets demonstrate the efficiency and effectiveness of our solution.
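A minimal sketch of the pivot-based estimation idea follows: each query is represented by its distances to a fixed set of pivots, and a small fully connected regressor maps that representation to the k-NN distance, so inference needs no access to the dataset itself. The number of pivots, the network size, and k are illustrative choices, not the paper's configuration.

```python
# Pivot features + a small MLP regressor for k-NN distance estimation.
import numpy as np
from sklearn.neighbors import NearestNeighbors
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
k, n_pivots = 10, 16
data = rng.standard_normal((20000, 8))            # stand-in dataset
pivots = data[rng.choice(len(data), n_pivots, replace=False)]

def featurize(X):
    # O(n_pivots) distances per query -- no access to the full dataset
    return np.linalg.norm(X[:, None, :] - pivots[None, :, :], axis=-1)

# ground-truth k-NN distances for training queries (offline, one-time cost)
queries = rng.standard_normal((5000, 8))
nn = NearestNeighbors(n_neighbors=k).fit(data)
knn_dist = nn.kneighbors(queries)[0][:, -1]       # distance to the k-th neighbor

model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=300, random_state=0)
model.fit(featurize(queries), knn_dist)

# at query time the estimate needs only pivot distances, not data accesses
new_q = rng.standard_normal((3, 8))
print(model.predict(featurize(new_q)))
```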
This paper proposes a method for visually explaining the decision-making process of 3D convolutional neural networks (CNNs) with a temporal extension of occlusion sensitivity analysis. The key idea is to occlude a specific volume of the input 3D spatio-temporal data with a 3D mask and then measure the degree of change in the output score. Occluded volumes that produce larger changes are considered more critical for the classification. However, while occlusion sensitivity analysis is commonly used to analyze single-image classification, applying this idea to video classification is not straightforward, because a simple fixed cuboid cannot cope with the motion in actions. To this end, we adapt the shape of the 3D occlusion mask to the complex motion of the target objects. Our flexible mask adaptation is performed by considering the temporal continuity and spatial co-occurrence of the optical flow extracted from the input video. We further propose to approximate our method using the first-order partial derivative of the score with respect to the input image, to reduce its computational cost. We demonstrate the effectiveness of our method through comparisons with conventional methods using the deletion/insertion metric and the pointing metric on UCF-101. The code is available at: https://github.com/uchiyama33/aosa.
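The sketch below shows plain spatio-temporal occlusion sensitivity with a fixed 3D cube, which is the starting point the paper extends; the flexible, optical-flow-adapted masks are not reproduced here. The r3d_18 backbone, cube size, and stride are illustrative choices rather than the paper's settings.

```python
# Fixed-cube spatio-temporal occlusion sensitivity for a 3D CNN.
import torch
from torchvision.models.video import r3d_18

model = r3d_18(weights=None).eval()
video = torch.randn(1, 3, 16, 112, 112)         # (batch, channels, frames, H, W)
target = 0                                       # class index to explain
cube_t, cube_s, stride = 4, 28, 28               # temporal/spatial cube size and stride

with torch.no_grad():
    base = model(video)[0, target].item()
    heat = torch.zeros(16 // cube_t, 112 // stride, 112 // stride)
    for ti in range(heat.shape[0]):
        for yi in range(heat.shape[1]):
            for xi in range(heat.shape[2]):
                occluded = video.clone()
                occluded[:, :, ti * cube_t:(ti + 1) * cube_t,
                         yi * stride:yi * stride + cube_s,
                         xi * stride:xi * stride + cube_s] = 0.0
                heat[ti, yi, xi] = base - model(occluded)[0, target].item()
# larger values in `heat` mark spatio-temporal regions the prediction relies on
```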